400 research outputs found

    Inhomogeneous Quantum Walks

    We study a natural construction of a general class of inhomogeneous quantum walks (namely walks whose transition probabilities depend on position). Within the class we analyze walks that are periodic in position and show that, depending on the period, such walks can be bounded or unbounded in time; in the latter case we analyze the asymptotic speed. We compare the construction to others in the existing literature. As an example we give a quantum version of a non-irreducible classical walk: the Polya Urn. Comment: 11 pages
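    A position-dependent coined walk on the line can be simulated directly to see how inhomogeneity changes the dynamics. The sketch below uses a generic coin-angle function per site; it is an illustrative construction, not the paper's specific one.

```python
import numpy as np

def inhomogeneous_walk(steps, theta):
    """Discrete-time quantum walk on the line whose coin angle depends
    on position: theta(x) is the rotation angle applied at site x.
    Illustrative position-dependent-coin construction only."""
    n = 2 * steps + 1                       # sites -steps..steps
    # state[x, c]: amplitude at site x with coin state c in {0: left, 1: right}
    state = np.zeros((n, 2), dtype=complex)
    state[steps, 0] = 1.0                   # start at the origin, coin |0>
    for _ in range(steps):
        new = np.zeros_like(state)
        for x in range(n):
            t = theta(x - steps)
            c, s = np.cos(t), np.sin(t)
            up = c * state[x, 0] + s * state[x, 1]    # rotate the coin
            dn = -s * state[x, 0] + c * state[x, 1]
            if x > 0:
                new[x - 1, 0] += up                   # shift left
            if x < n - 1:
                new[x + 1, 1] += dn                   # shift right
        state = new
    return (np.abs(state) ** 2).sum(axis=1)  # position distribution
```

    A period-2 coin, for instance `theta(x) = pi/4` on even sites and `pi/3` on odd sites, realizes the periodic-in-position case discussed above.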

    Multi-view Face Detection Using Deep Convolutional Neural Networks

    In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in the HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose/landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, and 2) there seems to be a correlation between the distribution of positive examples in the training set and the scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to previous methods, which are more complex and require annotations of either different poses or facial landmarks. Comment: in International Conference on Multimedia Retrieval 2015 (ICMR)
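    A single-model dense detector like the one described produces many overlapping scored windows, which are pruned with non-maximum suppression. Below is the standard greedy NMS algorithm; the IoU threshold is illustrative, not DDFD's actual setting.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.
    Keeps the highest-scoring box, drops boxes overlapping it above
    iou_thresh, and repeats on the remainder."""
    order = np.argsort(scores)[::-1]       # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of the kept box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]    # suppress heavy overlaps
    return keep
```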

    Speed-up via Quantum Sampling

    The Markov Chain Monte Carlo method is at the heart of efficient approximation schemes for a wide range of problems in combinatorial enumeration and statistical physics. It is therefore very natural and important to determine whether quantum computers can speed up classical mixing processes based on Markov chains. To this end, we present a new quantum algorithm, making it possible to prepare a quantum sample, i.e., a coherent version of the stationary distribution of a reversible Markov chain. Our algorithm has a significantly better running time than that of a previous algorithm based on adiabatic state generation. We also show that our methods provide a speed-up over a recently proposed method for obtaining ground states of (classical) Hamiltonians. Comment: 8 pages, fixed some minor typos
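    For context, the classical object being quantized is a reversible chain: one whose stationary distribution π satisfies detailed balance, π_i P_ij = π_j P_ji. A minimal classical sketch with uniform-proposal Metropolis transitions (the paper's contribution is preparing the coherent quantum version of π, which this does not do):

```python
import numpy as np

def metropolis_chain(pi):
    """Metropolis transition matrix on n states with uniform proposals.
    By construction the chain is reversible with stationary distribution
    the target pi."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # propose j uniformly, accept with min(1, pi_j / pi_i)
                P[i, j] = (1.0 / (n - 1)) * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()          # rejected proposals stay put
    return P

pi = np.array([0.5, 0.3, 0.2])
P = metropolis_chain(pi)
# detailed balance pi_i P_ij = pi_j P_ji implies pi P = pi
```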

    Decoherence in Quantum Walks on the Hypercube

    We study a natural notion of decoherence on quantum random walks over the hypercube. We prove that in this model there is a decoherence threshold beneath which the essential properties of the hypercubic quantum walk, such as linear mixing times, are preserved. Beyond the threshold, we prove that the walks behave like their classical counterparts. Comment: 7 pages, 3 figures; v2: corrected typos in references; v3: clarified section 2.1; v4: added references, expanded introduction; v5: final journal version
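    The classical behaviour the fully decohered walk reverts to can be simulated cheaply: the lazy classical random walk on the n-cube mixes to uniform, in contrast to the quantum walk's linear mixing. A small sketch tracking total-variation distance (illustrative baseline only, not the quantum evolution):

```python
import numpy as np

def hypercube_tv(n, steps):
    """Total-variation distance to uniform, per step, for the lazy
    classical random walk on the n-cube: stay with probability
    1/(n+1), otherwise flip a uniformly random coordinate.
    Vertices are encoded as n-bit integers."""
    N = 2 ** n
    dist = np.zeros(N)
    dist[0] = 1.0                           # start at vertex 00..0
    tvs = []
    for _ in range(steps):
        new = dist / (n + 1)                # lazy self-loop
        for b in range(n):
            flipped = np.arange(N) ^ (1 << b)   # flip coordinate b
            new += dist[flipped] / (n + 1)
        dist = new
        tvs.append(0.5 * np.abs(dist - 1.0 / N).sum())
    return tvs
```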

    Parity and Spin CFT with boundaries and defects

    This paper is a follow-up to [arXiv:2001.05055] in which two-dimensional conformal field theories in the presence of spin structures are studied. In the present paper we define four types of CFTs, distinguished by whether they need a spin structure or not in order to be well-defined, and whether their fields have parity or not. The cases of spin dependence without parity, and of parity without the need of a spin structure, have not, to our knowledge, been investigated in detail so far. We analyse these theories by extending the description of CFT correlators via three-dimensional topological field theory developed in [arXiv:hep-th/0204148] to include parity and spin. In each of the four cases, the defining data are a special Frobenius algebra $F$ in a suitable ribbon fusion category, such that the Nakayama automorphism of $F$ is the identity (oriented case) or squares to the identity (spin case). We use the TFT to define correlators in terms of $F$ and we show that these satisfy the relevant factorisation and single-valuedness conditions. We allow for world sheets with boundaries and topological line defects, and we specify the categories of boundary labels and the fusion categories of line defect labels for each of the four types. The construction can be understood in terms of topological line defects as gauging a possibly non-invertible symmetry. We analyse the case of a $\mathbb{Z}_2$-symmetry in some detail and provide examples of all four types of CFT, with Bershadsky-Polyakov models illustrating the two new types. Comment: v2 - expanded some discussions in the introduction and in the examples section - 112 pages

    SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    Going deeper and wider in neural architectures improves accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates the memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet and TensorFlow demonstrate that SuperNeurons trains at least 3.2432x deeper networks than current ones with leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12GB K40c. Comment: PPoPP '2018: 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
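    The memory saving from recomputation can be seen in a toy model: without it, every layer's activations are kept for backpropagation, so peak memory is their sum; keeping only checkpointed layers and rebuilding each intermediate segment on demand drops the peak toward the largest segment. This is a simplified model of the idea, not SuperNeurons' actual scheduler.

```python
def peak_activation_memory(layer_mem, checkpoints=None):
    """Toy peak-activation-memory model for a linear chain of layers.
    layer_mem[i] is layer i's activation size; checkpoints is the set
    of layer indices whose activations are retained for backward."""
    if checkpoints is None:
        return sum(layer_mem)      # vanilla training keeps everything
    kept = sum(layer_mem[i] for i in checkpoints)
    # During backward, each non-checkpointed segment is recomputed in
    # turn, so at most one segment of activations is live at a time.
    seg_peak, seg = 0, 0
    for i, m in enumerate(layer_mem):
        if i in checkpoints:
            seg_peak = max(seg_peak, seg)
            seg = 0
        else:
            seg += m
    seg_peak = max(seg_peak, seg)
    return kept + seg_peak
```

    For four equal layers of size 4, checkpointing layers 1 and 3 cuts the peak from 16 to 12 in this model; denser checkpoints trade memory for recompute time, which is the cost-awareness trade-off.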

    Mixing Times in Quantum Walks on Two-Dimensional Grids

    Mixing properties of discrete-time quantum walks on two-dimensional grids with torus-like boundary conditions are analyzed, focusing on their connection to the complexity of the corresponding abstract search algorithm. In particular, an exact expression for the stationary distribution of the coherent walk over odd-sided lattices is obtained after solving the eigenproblem for the evolution operator for this particular graph. The limiting distribution and mixing time of a quantum walk with a coin operator modified as in the abstract search algorithm are obtained numerically. On the basis of these results, the relation between the mixing time of the modified walk and the running time of the corresponding abstract search algorithm is discussed. Comment: 11 pages
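    As a classical point of comparison for these mixing times, the lazy random walk on an n x n torus mixes in time growing with the grid size, which can be measured numerically. A small sketch (classical baseline only, not the coined quantum walk):

```python
import numpy as np

def torus_mixing_time(n, eps=0.25):
    """First step at which the lazy classical random walk on an n x n
    torus (stay or move to one of the 4 neighbours, each with
    probability 1/5) is within total-variation eps of uniform."""
    N = n * n
    dist = np.zeros((n, n))
    dist[0, 0] = 1.0                       # start in one corner cell
    t = 0
    while True:
        t += 1
        dist = (dist
                + np.roll(dist, 1, axis=0) + np.roll(dist, -1, axis=0)
                + np.roll(dist, 1, axis=1) + np.roll(dist, -1, axis=1)) / 5.0
        if 0.5 * np.abs(dist - 1.0 / N).sum() < eps:
            return t
```

    Using `np.roll` implements the torus-like boundary conditions directly; larger odd-sided lattices take visibly longer to mix.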

    Simulation of Classical Thermal States on a Quantum Computer: A Transfer Matrix Approach

    We present a hybrid quantum-classical algorithm to simulate thermal states of classical Hamiltonians on a quantum computer. Our scheme employs a sequence of locally controlled rotations, building up the desired state by adding qubits one at a time. We identify a class of classical models for which our method is efficient and avoids potential exponential overheads encountered by Grover-like or quantum Metropolis schemes. Our algorithm also gives an exponential advantage for 2D Ising models with magnetic field on a square lattice, compared with the previously known Zalka's algorithm. Comment: 5 pages, 3 figures; (new in version 2: added new figure, title changed, rearranged paragraphs)
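    The classical transfer-matrix structure underlying such schemes is easy to state concretely: for the 1D periodic Ising model the partition function is Z = Tr(T^n) with T a 2x2 transfer matrix. A minimal classical sketch (the quantum algorithm itself is of course not reproduced here):

```python
import numpy as np

def ising_Z(n, beta, h=0.0):
    """Partition function of the 1D periodic Ising model with coupling 1
    and field h at inverse temperature beta, via the transfer matrix:
    T[s, s'] = exp(beta * s * s' + beta * h * (s + s') / 2)."""
    T = np.array([
        [np.exp(beta * (1 + h)), np.exp(-beta)],
        [np.exp(-beta),          np.exp(beta * (1 - h))],
    ])
    return np.trace(np.linalg.matrix_power(T, n))   # Z = Tr(T^n)
```

    For small chains this agrees with brute-force summation over all spin configurations, which is the sanity check that makes transfer matrices trustworthy building blocks.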